Fixed-to-Variable Length Distribution Matching
Rana Ali Amjad and Georg Böcherer
Institute for Communications Engineering, Technische Universität München, Germany
arXiv v2 [cs.IT], Jul 2013

Abstract—Fixed-to-variable length (f2v) matchers are used to reversibly transform an input sequence of independent and uniformly distributed bits into an output sequence of bits that are (approximately) independent and distributed according to a target distribution. The degree of approximation is measured by the informational divergence between the output distribution and the target distribution. An algorithm is developed that efficiently finds optimal f2v codes. It is shown that by encoding the input bits blockwise, the informational divergence per bit approaches zero as the block length approaches infinity. A relation to data compression by Tunstall coding is established.

I. INTRODUCTION

Distribution matching considers the problem of mapping uniformly distributed bits to symbols that are approximately distributed according to a target distribution. In contrast to the simulation of random processes [1] or the exact generation of distributions [2], distribution matching requires that the original bit sequence can be recovered from the generated symbol sequence. We measure the degree of approximation by the normalized informational divergence (I-divergence), which is an appropriate measure when we want to achieve the capacity of noisy and noiseless channels [3, Sec. & Chap. 6] by using a matcher. A related work is [4], [3, Chap. 3], where it is shown that variable-to-fixed length (v2f) matching is optimally done by geometric Huffman coding and the relation to fixed-to-variable length (f2v) source encoders is discussed. In the present work, we consider binary distribution matching by prefix-free f2v codes.

A. Rooted Trees With Probabilities

We use the framework of rooted trees with probabilities [5], [6]. Let T be the set of all binary trees with 2^m leaves and consider some tree T in T.
Index all nodes by the numbers N := {1, 2, 3, ...}, where 1 is the root. Note that there are at least 2^{m+1} - 1 nodes in the tree, with equality if the tree is complete. A tree is complete if any right-infinite binary sequence starts with a path from the root to a leaf. Let L ⊂ N be the set of leaf nodes and let B := N \ L be the set of branching nodes. Probabilities can be assigned to the tree by defining a distribution over the 2^m paths through the tree. For each i in N, denote by P_T(i) the probability that a path is chosen that passes through node i. Since each path ends at a different leaf node, P_T defines a leaf distribution, i.e.,

    sum_{i in L} P_T(i) = 1.

For each branching node i in B, denote by P^i_T the branching distribution, i.e., the probabilities of branch 0 and branch 1 after passing through node i.

This work was supported by the German Ministry of Education and Research in the framework of an Alexander von Humboldt Professorship.

Fig. 1. A rooted tree with probabilities. The set of branching nodes is B = {1, 2, 4} and the set of leaf nodes is L = {3, 5, 6, 7}. In (a), a uniform leaf distribution U_T is chosen, i.e., for each i in L, U_T(i) = 1/4. The leaf distribution determines the node probabilities and the branching distributions. In (b), identical branching distributions are chosen, i.e., for each i in B, Q^i_T = Q, where Q(0) =: q_0 = 2/3 and Q(1) =: q_1 = 1/3. The branching distributions determine the resulting node probabilities, which we denote by Q_T(i). Since the tree is complete, sum_{i in L} Q_T(i) = 1 and Q_T defines a leaf distribution.
The probabilities on the tree are completely defined either by defining the branching distributions {P^i_T, i in B} or by defining the leaf distribution {P_T(i), i in L}. See Fig. 1 for an example.
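The node probabilities of Fig. 1(b) can be reproduced in a few lines. A minimal sketch (the helper names are ours, not the paper's): each leaf probability is the product of the branching probabilities along its root-to-leaf path, and for a complete tree the leaf probabilities sum to one.

```python
# Leaf probabilities of the tree in Fig. 1 under identical branching
# distributions Q(0) = 2/3, Q(1) = 1/3, using exact rational arithmetic.
from fractions import Fraction

q0, q1 = Fraction(2, 3), Fraction(1, 3)

# Leaves of the example tree, written as root-to-leaf paths.
leaves = ["1", "01", "000", "001"]

def path_prob(path, q0, q1):
    """Q_T(leaf): product of the branching probabilities along the path."""
    p = Fraction(1)
    for bit in path:
        p *= q0 if bit == "0" else q1
    return p

probs = {path: path_prob(path, q0, q1) for path in leaves}

# Since the tree is complete, Q_T is a leaf distribution: it sums to 1.
assert sum(probs.values()) == 1
```

This reproduces Q_T(3) = 1/3, Q_T(5) = 2/9, Q_T(6) = 8/27 and Q_T(7) = 4/27 from the figure.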
B. v2f Source Encoding and f2v Distribution Matching

Consider a binary distribution Q with q_0 = Q(0), q_1 = Q(1), 0 < q_0 < 1, and a binary tree T with 2^m leaves. Let Q_T(i), i in N, be the node probabilities that result from having all branching distributions equal to Q, i.e., Q^i_T = Q for each i in B. See Fig. 1(b) for an example. Let U_T be a uniform leaf distribution, i.e., U_T(i) = 2^{-m} for each i in L; see Fig. 1(a) for an example.

We use the tree as a v2f source code for a discrete memoryless source (DMS) Q. To guarantee lossless compression, the tree for a v2f source encoder has to be complete. Consequently, Q_T defines a leaf distribution, i.e., sum_{i in L} Q_T(i) = 1. We denote the set of complete binary trees with 2^m leaves by C. Each code word consists of log_2 2^m = m bits and the entropy of the leaf distribution at the encoder output is

    H(Q_T) = sum_{i in L} Q_T(i) [-log_2 Q_T(i)]
           = sum_{i in L} Q_T(i) [-log_2 Q_T(i) + log_2 U_T(i) - log_2 U_T(i)]
           = m - D(Q_T || U_T)                                            (1)

where H(Q_T) is the entropy of the leaf distribution defined by Q_T and where D(Q_T || U_T) is defined accordingly. From (1), we conclude that the objective is to solve

    min_{T in C} D(Q_T || U_T).                                           (2)

The solution is known to be attained by Tunstall coding [7]. The tree in Fig. 1 is a Tunstall code for Q: q_0 = 2/3, q_1 = 1/3 and m = 2, and the corresponding v2f source encoder maps the four variable-length paths {000, 001, 01, 1} one-to-one onto the four 2-bit codewords {00, 01, 10, 11}.    (3)

The dual problem is f2v distribution matching. Q is now a binary target distribution and we generate the codewords defined by the paths through a (not necessarily complete) binary tree uniformly according to U_T. For example, the f2v distribution matcher defined by the tree in Fig. 1 maps the four 2-bit input blocks {00, 01, 10, 11} one-to-one onto the paths {000, 001, 01, 1}.    (4)

Denote by l_i, i in L, the path lengths and let L be a random variable that takes the value l_i with probability U_T(i). We want the I-divergence per output bit of U_T and Q_T to be small, i.e., we want to solve

    min_{T in T} D(U_T || Q_T) / E[L].                                    (5)

In contrast to (2), the minimization is now over the set T of all (not necessarily complete) binary trees with 2^m leaves.
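Tunstall's construction, repeatedly splitting the most probable leaf until the tree has 2^m leaves, can be sketched as follows (function names are ours). For q_0 = 2/3 and m = 2 it reproduces the tree of Fig. 1:

```python
# Sketch of Tunstall tree construction for a binary distribution [q0, q1]:
# repeatedly split the leaf with the largest probability until the
# complete tree has 2**m leaves.
import heapq

def tunstall(q0, q1, m):
    """Return the sorted root-to-leaf paths of the Tunstall tree with 2**m leaves."""
    # Max-heap of (negative probability, path); the root is the first leaf.
    heap = [(-1.0, "")]
    while len(heap) < 2 ** m:
        negp, path = heapq.heappop(heap)      # most probable leaf
        heapq.heappush(heap, (negp * q0, path + "0"))
        heapq.heappush(heap, (negp * q1, path + "1"))
    return sorted(path for _, path in heap)

# For Q(0) = 2/3 and m = 2 this yields the tree of Fig. 1.
assert tunstall(2 / 3, 1 / 3, 2) == ["000", "001", "01", "1"]
```

The heap always splits the leaf with the largest induced probability, which is exactly the greedy rule attributed to Tunstall coding in [8, p. 47].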
Note that although for a non-complete tree we have sum_{i in L} Q_T(i) < 1, the problem (5) is well-defined, since there is always a complete tree with leaves L' ⊇ L and sum_{i in L'} Q_T(i) = 1. The sum in (5) is over the support of U_T, which is L. Solving (5) is the problem that we consider in this work.

C. Outline

In Sec. II and Sec. III, we restrict attention to complete trees. We show that Tunstall coding applied to Q minimizes D(U_T || Q_T) and that iteratively applying Tunstall coding to weighted versions of Q minimizes D(U_T || Q_T)/E[L]. In Sec. IV we derive conditions for the optimality of complete trees and show that the I-divergence per bit can be made arbitrarily small by letting the blocklength approach infinity. Finally, in Sec. V, we illustrate by an example that source decoders are sub-optimal distribution matchers and, vice-versa, distribution dematchers are sub-optimal source encoders.

II. MINIMIZING I-DIVERGENCE

Let R be the set of real numbers. For a finite set S, we say that W: S -> R is a weighted distribution if for each i in S, W(i) > 0. We allow sum_{i in S} W(i) ≠ 1. The I-divergence of a distribution P and a weighted distribution W is

    D(P || W) = sum_{i in supp P} P(i) log_2 [P(i) / W(i)]                (6)

where supp P denotes the support of P. The reason why we need this generalization of the notions of distribution and I-divergence will become clear in the next section.

Proposition 1. Let Q be a weighted binary target distribution, and let

    T* = argmin_{T in C} D(U_T || Q_T)                                    (7)

be an optimal complete tree. Then we find that

i. An optimal complete tree T* can be constructed by applying Tunstall coding to Q.
ii. If q_0 <= 1 and q_1 <= 1, then T* also minimizes D(U_T || Q_T) among all possibly non-complete binary trees T in T, i.e., the optimal tree is complete.

Proof: Part i. We write

    T* = argmin_{T in C} sum_{i in L} 2^{-m} log_2 [2^{-m} / Q_T(i)]      (8)
       = argmax_{T in C} sum_{i in L} log_2 Q_T(i).                       (9)

Consider now an arbitrary complete tree T in C. Since the tree is complete, there exist (at least) two leaves that are siblings, say j and j+1. Denote by k the corresponding branching node.
The contribution of these two leaves to the objective function on the right-hand side of (9) can be written as

    log_2 Q_T(j) + log_2 Q_T(j+1)
      = log_2 [Q_T(k) q_0] + log_2 [Q_T(k) q_1]
      = log_2 Q_T(k) + log_2 Q_T(k) + log_2 q_0 + log_2 q_1.              (10)
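Repeatedly merging sibling leaves as above telescopes into a sum over the branching nodes plus a tree-independent constant. This can be checked numerically on the tree of Fig. 1 (helper names are ours):

```python
# Check of the telescoping argument on the tree of Fig. 1: for a complete
# tree with 2**m leaves, the sum of log2 Q_T(i) over the leaves equals the
# sum over the branching nodes plus (2**m - 1) * [log2 q0 + log2 q1].
import math

q0, q1, m = 2 / 3, 1 / 3, 2
leaves = ["1", "01", "000", "001"]    # complete tree of Fig. 1
branches = ["", "0", "00"]            # root and inner nodes as paths

def logprob(path):
    """log2 of the node probability reached via the given path."""
    return sum(math.log2(q0 if b == "0" else q1) for b in path)

lhs = sum(logprob(p) for p in leaves)
rhs = sum(logprob(p) for p in branches) + (2 ** m - 1) * (math.log2(q0) + math.log2(q1))
assert math.isclose(lhs, rhs)
```

Since the constant does not depend on the tree, maximizing the leaf sum is the same as maximizing the branching-node sum, which is what the proof exploits.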
Now consider the tree T' that results from removing the nodes j and j+1. The new set of leaf nodes is L' = {k} ∪ L \ {j, j+1} and the new set of branching nodes is B' = B \ {k}. Also, Q_T defines a weighted leaf distribution on L'. The same procedure can be applied repeatedly by setting T = T', until T consists only of the root node. We use this idea to re-write the objective function of the right-hand side of (9) as follows:

    sum_{i in L} log_2 Q_T(i)
      = sum_{i in L'} log_2 Q_T(i) + log_2 Q_T(k) + log_2 q_0 + log_2 q_1
      = ...
      = sum_{k in B} log_2 Q_T(k) + (2^m - 1) [log_2 q_0 + log_2 q_1].    (11)

Since (2^m - 1)[log_2 q_0 + log_2 q_1] is a constant independent of the tree T, we have

    argmax_{T in C} sum_{i in L} log_2 Q_T(i) = argmax_{T in C} sum_{k in B} log_2 Q_T(k).    (12)

The right-hand side of (12) is clearly maximized by the complete tree whose branching nodes have the greatest weighted probabilities. According to [8, p. 47], this is exactly the tree that is constructed when Tunstall coding is applied to the weighted distribution Q.

Part ii. We now consider q_0 <= 1 and q_1 <= 1. Assume we have constructed a non-complete binary tree. Because of non-completeness, we can remove a branch from the tree. Without loss of generality, assume that this branch is labeled by a zero. Denote by S the leaves on the subtree of the branch, and denote the tree after removing the branch by T'. Now,

    Q_{T'}(i) = Q_T(i) / q_0 >= Q_T(i), for each i in S                   (13)

where the inequality follows because by assumption q_0 <= 1. Thus, for the new tree T', the objective function (9) is bounded as

    sum_{i in L} log_2 Q_{T'}(i) = sum_{i in L} log_2 Q_T(i) - sum_{i in S} log_2 q_0
                                 >= sum_{i in L} log_2 Q_T(i).            (14)

In summary, under the assumption q_0 <= 1 and q_1 <= 1, the objective function (9) that we want to maximize does not decrease when removing branches, which shows that there is an optimal complete tree. This proves statement ii. of the proposition.

III. MINIMIZING I-DIVERGENCE PER BIT

The following two propositions relate the problem of minimizing the I-divergence per bit to the problem of minimizing the un-normalized I-divergence. Let T be some set of binary trees with 2^m leaves and define

    d* := min_{T in T} D(U_T || Q_T) / E[L].                              (15)
Proposition 2. We have

    T* := argmin_{T in T} D(U_T || Q_T) / E[L] = argmin_{T in T} D(U_T || Q~_T)    (16)

where Q~_T is the weighted distribution induced by Q~ := [q_0 2^{d*}, q_1 2^{d*}].

Proof: By (15), for any tree T in T, we have

    D(U_T || Q_T) / E[L] >= d*, with equality if T = T*                   (17)

and equivalently

    D(U_T || Q_T) - d* E[L] >= 0, with equality if T = T*.                (18)

We write the left-hand side of (18) as

    D(U_T || Q_T) - d* E[L]
      = sum_{i in L} U_T(i) log_2 [U_T(i) / Q_T(i)] - d* sum_{i in L} U_T(i) l_i
      = sum_{i in L} U_T(i) [log_2 (U_T(i) / Q_T(i)) - log_2 2^{d* l_i}]
      = sum_{i in L} U_T(i) log_2 [U_T(i) / (Q_T(i) 2^{d* l_i})].         (19)

Consider the path through the tree that ends at leaf i. Denote by l^0_i and l^1_i the number of times the labels 0 and 1 occur, respectively. The length of the path can be expressed as l_i = l^0_i + l^1_i. The term Q_T(i) 2^{d* l_i} can now be written as

    Q_T(i) 2^{d* l_i} = q_0^{l^0_i} q_1^{l^1_i} 2^{d* (l^0_i + l^1_i)}
                      = (q_0 2^{d*})^{l^0_i} (q_1 2^{d*})^{l^1_i} = Q~_T(i).    (20)

Using (20) and (19) in (18) shows that for any binary tree T in T we have

    D(U_T || Q~_T) = sum_{i in L} U_T(i) log_2 [U_T(i) / Q~_T(i)] >= 0, with equality if T = T*    (21)

which is the statement of the proposition.

Proposition 3. Define

    d* := min_{T in C} D(U_T || Q_T) / E[L].                              (22)

Then the optimal complete tree

    T* := argmin_{T in C} D(U_T || Q_T) / E[L]                            (23)

is constructed by applying Tunstall coding to Q~ = [q_0 2^{d*}, q_1 2^{d*}].

Proof: The proposition is a consequence of Prop. 2 and Prop. 1.i.
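The key identity (20) in the proof of Prop. 2 holds for any exponent, not just the optimal d*. A quick numeric check (the variable names and the sample values of R and the path are ours):

```python
# Numeric check of identity (20): Q_T(i) * 2**(R * l_i) equals the weighted
# probability induced by Qtilde = [q0 * 2**R, q1 * 2**R], for any rate R.
import math

q0, q1, R = 2 / 3, 1 / 3, 0.3   # R plays the role of d* = D(U_T||Q_T)/E[L]
path = "00101"                  # an arbitrary root-to-leaf path
l0, l1 = path.count("0"), path.count("1")

lhs = (q0 ** l0) * (q1 ** l1) * 2 ** (R * len(path))
rhs = ((q0 * 2 ** R) ** l0) * ((q1 * 2 ** R) ** l1)
assert math.isclose(lhs, rhs)
```

The identity is what turns the normalized objective (5) into the un-normalized objective (16) over a weighted distribution.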
Algorithm 1.
    T^ <- argmin_{T in C} D(U_T || Q_T)   (solved by Tunstall coding on Q)
    repeat
        1. d^ <- D(U_{T^} || Q_{T^}) / E[L_{T^}]
        2. T^ <- argmin_{T in C} [D(U_T || Q_T) - d^ E[L_T]]   (Tunstall coding on Q 2^{d^})
    until D(U_{T^} || Q_{T^}) - d^ E[L_{T^}] = 0
    d* = d^, T* = T^

A. Iterative Algorithm

By Prop. 3, if we know the I-divergence per bit d*, then we can find T* by Tunstall coding. However, d* is not known a priori. We solve this problem by iteratively applying Tunstall coding to Q 2^{d^}, where d^ is an estimate of d*, and by updating our estimate. This procedure is stated in Alg. 1.

Proposition 4. Alg. 1 finds (d*, T*) as defined in Prop. 3 in finitely many steps.

Proof: The proof is similar to the proof of [3, Prop. 4.1]. We first show that d^ is strictly monotonically decreasing. Let d^_i be the value that is assigned to d^ in step 1. of the i-th iteration and denote by T^_i the value that is assigned to T^ in step 2. of the i-th iteration. Suppose that the algorithm does not terminate in the i-th iteration. By step 1., we have

    D(U_{T^_{i-1}} || Q_{T^_{i-1}}) - d^_i E[L_{T^_{i-1}}] = 0.           (24)

By step 2., we have

    T^_i = argmin_{T in C} [D(U_T || Q_T) - d^_i E[L_T]]                  (25)

and since by our assumption the algorithm does not terminate in the i-th iteration, we have

    D(U_{T^_i} || Q_{T^_i}) - d^_i E[L_{T^_i}] < 0
    <=> D(U_{T^_i} || Q_{T^_i}) / E[L_{T^_i}] < d^_i
    <=> d^_{i+1} < d^_i.                                                  (26)

Now assume the algorithm terminated, and let T^ be the tree after termination. Because of the assignments in steps 1. and 2., the terminating condition implies that for any tree T in C, we have

    D(U_T || Q_T) - d^ E[L_T] >= 0, with equality if T = T^.              (27)

Consequently, we have

    D(U_T || Q_T) / E[L_T] >= d^, with equality if T = T^.                (28)

We conclude that after termination, (d^, T^) is equal to the optimal tuple (d*, T*) in Prop. 3. Finally, we have shown that d^ is strictly monotonically decreasing, so that T^_i ≠ T^_j for all i < j. But there is only a finite number of complete binary trees with 2^m leaves. Thus, the algorithm terminates after finitely many steps.

IV. OPTIMALITY OF COMPLETE TREES

Complete trees are not optimal in general: Consider m = 1 and Q: q_0 = 5/6, q_1 = 1/6.
For m = 1, Tunstall coding constructs the (unique) complete binary tree T with 2 leaves, independent of which target vector we pass to it. The path lengths are l_1 = l_2 = 1. The I-divergence per bit achieved by this tree is

    D(U_T || Q_T) / E[L] = -1 - (1/2) log_2 (q_0 q_1) ≈ 0.42 bits.        (29)

Now, we could instead use the non-complete tree T' with the paths 0 and 10. In this case, the I-divergence per bit is

    D(U_{T'} || Q_{T'}) / E[L] = [-1 - (1/2) log_2 (q_0 q_1 q_0)] / [(1/2)(1 + 2)] ≈ 0.37 bits.    (30)

In summary, for the considered example, using a complete tree is sub-optimal. We will in the following derive simple conditions on the target vector Q that guarantee that the optimal tree is complete.

A. Sufficient Conditions for Optimality

Proposition 5. Let Q be a distribution. If max{q_0, q_1} <= 4 min{q_0, q_1}, then the optimal tree is complete for any m and it is constructed by Alg. 1.

Proof: According to Prop. 1.ii, the tree that minimizes D(U_T || Q~_T) is complete if the entries of the weighted distribution Q 2^{d*} are both less than or equal to one. Without loss of generality, assume that q_0 >= q_1. Thus, we only need to check this condition for q_0. We have

    q_0 2^{d*} <= 1  <=>  d* <= -log_2 q_0.                               (31)

We upper-bound d* by the value achieved by the (unique) complete tree with all path lengths equal to m, namely

    d* <= -1 - (1/2) log_2 (q_0 q_1).                                     (32)

Substituting the right-hand side of (32) for d* in (31), we obtain

    -1 - (1/2) log_2 (q_0 q_1) <= -log_2 q_0
    <=> log_2 q_0 - (1/2) log_2 (q_0 q_1) <= 1
    <=> (1/2) log_2 (q_0 / q_1) <= 1
    <=> q_0 <= 4 q_1                                                      (33)

which is the condition stated in the proposition.
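Alg. 1 can be sketched in a few lines (all function and variable names are ours, a minimal sketch): alternate Tunstall coding on the weighted distribution Q 2^{d^} with updates of the estimate d^ until the estimate is a fixed point. Run on the example of Sec. V, it also reproduces the fact that the optimal matcher beats the matcher obtained from the Tunstall source code:

```python
# Sketch of Alg. 1: iterate Tunstall coding on [q0 * 2**dhat, q1 * 2**dhat]
# while updating the estimate dhat of the minimal I-divergence per bit.
import heapq
import math

def tunstall(q0, q1, m):
    """Root-to-leaf paths of the Tunstall tree with 2**m leaves."""
    heap = [(-1.0, "")]                      # max-heap over (weighted) probs
    while len(heap) < 2 ** m:
        negp, path = heapq.heappop(heap)     # split the most probable leaf
        heapq.heappush(heap, (negp * q0, path + "0"))
        heapq.heappush(heap, (negp * q1, path + "1"))
    return [path for _, path in heap]

def div_per_bit(paths, q0, q1):
    """D(U_T || Q_T) / E[L] for a uniform distribution over the paths."""
    u = 1 / len(paths)
    D = EL = 0.0
    for p in paths:
        qt = math.prod(q0 if b == "0" else q1 for b in p)
        D += u * math.log2(u / qt)
        EL += u * len(p)
    return D / EL

def optimal_matcher(q0, q1, m, tol=1e-12):
    dhat = div_per_bit(tunstall(q0, q1, m), q0, q1)   # init: Tunstall on Q
    while True:
        # Tunstall on the weighted distribution [q0 * 2**dhat, q1 * 2**dhat]
        tree = tunstall(q0 * 2 ** dhat, q1 * 2 ** dhat, m)
        d = div_per_bit(tree, q0, q1)
        if dhat - d < tol:        # dhat decreases strictly until optimal
            return d, sorted(tree)
        dhat = d

# Example of Sec. V: the optimal matcher beats the Tunstall-based one.
q0, q1 = 0.615, 0.385
d_tunstall = div_per_bit(tunstall(q0, q1, 2), q0, q1)
d_opt, tree = optimal_matcher(q0, q1, 2)
assert d_opt < d_tunstall
```

The termination test mirrors Alg. 1: once Tunstall coding on Q 2^{d^} returns a tree whose I-divergence per bit equals d^, condition (27) holds and the estimate is optimal.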
B. Asymptotic Achievability for Complete Trees

Proposition 6. Denote by T*(m) the complete tree with 2^m leaves that is constructed by applying Alg. 1 to a target distribution Q. Then we have

    D(U_{T*(m)} || Q_{T*(m)}) / E[L] <= [-log_2 min{q_0, q_1}] / m        (34)

and in particular, the I-divergence per bit approaches zero as m -> infinity.

Proof: The expected length can be bounded by the converse of the Coding Theorem for a DMS [8, p. 45] as

    E[L] >= H[U_{T(m)}] = m.                                              (35)

Thus, we have

    d* <= min_{T(m) in C} D(U_{T(m)} || Q_{T(m)}) / m.                    (36)

The tree T(m) that minimizes the right-hand side is found by applying Tunstall coding to Q. Without loss of generality, assume that q_0 >= q_1. According to the Tunstall Lemma [8, p. 47], the induced leaf probability of a tree constructed by Tunstall coding is lower bounded as

    Q_{T(m)}(i) >= 2^{-m} q_1, for each leaf i in L.                      (37)

We can therefore bound the I-divergence as

    D(U_{T(m)} || Q_{T(m)}) = sum_{i in L} 2^{-m} log_2 [2^{-m} / Q_{T(m)}(i)]
                            <= sum_{i in L} 2^{-m} log_2 [2^{-m} / (2^{-m} q_1)]
                            = -log_2 q_1.                                 (38)

We can now bound the I-divergence per bit as

    D(U_{T(m)} || Q_{T(m)}) / E[L] <= -log_2 q_1 / m -> 0 as m -> infinity.    (39)

This proves the proposition.

C. Optimality of Complete Trees for Large Enough m

Proposition 7. For any target distribution Q with q_0 < 1 and q_1 = 1 - q_0, there is an m_0 such that for all m > m_0, the tree that minimizes

    D(U_T || Q_T) / E[L]                                                  (40)

is complete.

Proof: Without loss of generality, assume that q_0 >= q_1. By Prop. 6, we have d* <= -log_2 q_1 / m. Thus, there exists an m_0 such that

    d* <= -log_2 q_1 / m <= -log_2 q_0, for all m > m_0.                  (41)

Thus, for all m > m_0, both entries of Q 2^{d*} are smaller than or equal to 1. The proposition now follows by Prop. 2 and Prop. 1.ii.

TABLE I
COMPARISON OF V2F SOURCE CODING AND F2V DISTRIBUTION MATCHING: Q: q_0 = 0.615, q_1 = 0.385; m = 2. The first column is obtained by Tunstall coding on Q, the second by Alg. 1 on Q; the rows give the code mapping, the redundancy D(Q_T || U_T), and the I-divergence per bit D(U_T || Q_T) / E[L].

V. SOURCE CODING VERSUS DISTRIBUTION MATCHING

An ideal source encoder transforms the output of a DMS Q into a sequence of bits that are independent and uniformly distributed. Conversely, applying the corresponding decoder to a sequence of uniformly distributed bits generates a sequence of symbols that are i.i.d. according to Q.
This suggests designing an f2v distribution matcher by first calculating the optimal v2f source encoder. The inverse mapping is f2v and can be used as a distribution matcher. We illustrate by an example that this approach is sub-optimal in general. Consider the DMS Q with q_0 = 0.615, q_1 = 0.385. We calculate the optimal binary v2f source encoder with blocklength m = 2 by applying Tunstall coding to Q. The resulting encoder is displayed in the 1st column of Table I. Next, we use Alg. 1 to calculate the optimal f2v matcher for Q. The resulting mapping is displayed in the 2nd column of Table I. The I-divergence per bit achieved by the optimal matcher is smaller than the value obtained by using the source decoder as a matcher. In general, the decoder of an optimal v2f source encoder is a sub-optimal f2v distribution matcher, and the dematcher of an optimal f2v distribution matcher is a sub-optimal v2f source encoder.

REFERENCES

[1] Y. Steinberg and S. Verdú, "Simulation of random processes and rate-distortion theory," IEEE Trans. Inf. Theory, vol. 42, no. 1, 1996.
[2] D. Knuth and A. Yao, "The complexity of nonuniform random number generation." New York: Academic Press, 1976.
[3] G. Böcherer, "Capacity-achieving probabilistic shaping for noisy and noiseless channels," Ph.D. dissertation, RWTH Aachen University, 2012. [Online].
[4] G. Böcherer and R. Mathar, "Matching dyadic distributions to channels," in Proc. Data Compression Conf., 2011.
[5] R. A. Rueppel and J. L. Massey, "Leaf-average node-sum interchanges in rooted trees with applications," in Communications and Cryptography: Two Sides of One Tapestry, R. E. Blahut, D. J. Costello Jr., U. Maurer, and T. Mittelholzer, Eds. Kluwer Academic Publishers, 1994.
[6] G. Böcherer, "Rooted trees with probabilities revisited," Feb. 2013. [Online].
[7] B. Tunstall, "Synthesis of noiseless compression codes," Ph.D. dissertation, 1967.
[8] J. L. Massey, "Applied digital information theory I," lecture notes, ETH Zurich. [Online].
More informationDesign of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding
IEEE TRANSACTIONS ON INFORMATION THEORY (SUBMITTED PAPER) 1 Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding Lai Wei, Student Meber, IEEE, David G. M. Mitchell, Meber, IEEE, Thoas
More information3.8 Three Types of Convergence
3.8 Three Types of Convergence 3.8 Three Types of Convergence 93 Suppose that we are given a sequence functions {f k } k N on a set X and another function f on X. What does it ean for f k to converge to
More informationThe Inferential Complexity of Bayesian and Credal Networks
The Inferential Coplexity of Bayesian and Credal Networks Cassio Polpo de Capos,2 Fabio Gagliardi Cozan Universidade de São Paulo - Escola Politécnica 2 Pontifícia Universidade Católica de São Paulo cassio@pucsp.br,
More informationFeature Extraction Techniques
Feature Extraction Techniques Unsupervised Learning II Feature Extraction Unsupervised ethods can also be used to find features which can be useful for categorization. There are unsupervised ethods that
More informationThe Weierstrass Approximation Theorem
36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined
More informationarxiv: v1 [cs.ds] 3 Feb 2014
arxiv:40.043v [cs.ds] 3 Feb 04 A Bound on the Expected Optiality of Rando Feasible Solutions to Cobinatorial Optiization Probles Evan A. Sultani The Johns Hopins University APL evan@sultani.co http://www.sultani.co/
More informationA Note on Online Scheduling for Jobs with Arbitrary Release Times
A Note on Online Scheduling for Jobs with Arbitrary Release Ties Jihuan Ding, and Guochuan Zhang College of Operations Research and Manageent Science, Qufu Noral University, Rizhao 7686, China dingjihuan@hotail.co
More informationPattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition
More informationAsynchronous Gossip Algorithms for Stochastic Optimization
Asynchronous Gossip Algoriths for Stochastic Optiization S. Sundhar Ra ECE Dept. University of Illinois Urbana, IL 680 ssrini@illinois.edu A. Nedić IESE Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu
More informationStochastic Subgradient Methods
Stochastic Subgradient Methods Lingjie Weng Yutian Chen Bren School of Inforation and Coputer Science University of California, Irvine {wengl, yutianc}@ics.uci.edu Abstract Stochastic subgradient ethods
More informationBlock designs and statistics
Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent
More informationComputable Shell Decomposition Bounds
Coputable Shell Decoposition Bounds John Langford TTI-Chicago jcl@cs.cu.edu David McAllester TTI-Chicago dac@autoreason.co Editor: Leslie Pack Kaelbling and David Cohn Abstract Haussler, Kearns, Seung
More informationMixed Robust/Average Submodular Partitioning
Mixed Robust/Average Subodular Partitioning Kai Wei 1 Rishabh Iyer 1 Shengjie Wang 2 Wenruo Bai 1 Jeff Biles 1 1 Departent of Electrical Engineering, University of Washington 2 Departent of Coputer Science,
More informationUniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval
Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,
More informationIterative Decoding of LDPC Codes over the q-ary Partial Erasure Channel
1 Iterative Decoding of LDPC Codes over the q-ary Partial Erasure Channel Rai Cohen, Graduate Student eber, IEEE, and Yuval Cassuto, Senior eber, IEEE arxiv:1510.05311v2 [cs.it] 24 ay 2016 Abstract In
More informationRecovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup)
Recovering Data fro Underdeterined Quadratic Measureents (CS 229a Project: Final Writeup) Mahdi Soltanolkotabi Deceber 16, 2011 1 Introduction Data that arises fro engineering applications often contains
More informationThe Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate
The Siplex Method is Strongly Polynoial for the Markov Decision Proble with a Fixed Discount Rate Yinyu Ye April 20, 2010 Abstract In this note we prove that the classic siplex ethod with the ost-negativereduced-cost
More informationMulti-Dimensional Hegselmann-Krause Dynamics
Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory
More informationQuantile Search: A Distance-Penalized Active Learning Algorithm for Spatial Sampling
Quantile Search: A Distance-Penalized Active Learning Algorith for Spatial Sapling John Lipor, Laura Balzano, Branko Kerkez, and Don Scavia 3 Departent of Electrical and Coputer Engineering Departent of
More informationChapter 6 1-D Continuous Groups
Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:
More informationIn this chapter, we consider several graph-theoretic and probabilistic models
THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions
More informationDistributed Subgradient Methods for Multi-agent Optimization
1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions
More informationCompressive Sensing Over Networks
Forty-Eighth Annual Allerton Conference Allerton House, UIUC, Illinois, USA Septeber 29 - October, 200 Copressive Sensing Over Networks Soheil Feizi MIT Eail: sfeizi@it.edu Muriel Médard MIT Eail: edard@it.edu
More informationBayesian Learning. Chapter 6: Bayesian Learning. Bayes Theorem. Roles for Bayesian Methods. CS 536: Machine Learning Littman (Wu, TA)
Bayesian Learning Chapter 6: Bayesian Learning CS 536: Machine Learning Littan (Wu, TA) [Read Ch. 6, except 6.3] [Suggested exercises: 6.1, 6.2, 6.6] Bayes Theore MAP, ML hypotheses MAP learners Miniu
More informationOn the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs
On the Inapproxiability of Vertex Cover on k-partite k-unifor Hypergraphs Venkatesan Guruswai and Rishi Saket Coputer Science Departent Carnegie Mellon University Pittsburgh, PA 1513. Abstract. Coputing
More informationA PROBABILISTIC AND RIPLESS THEORY OF COMPRESSED SENSING. Emmanuel J. Candès Yaniv Plan. Technical Report No November 2010
A PROBABILISTIC AND RIPLESS THEORY OF COMPRESSED SENSING By Eanuel J Candès Yaniv Plan Technical Report No 200-0 Noveber 200 Departent of Statistics STANFORD UNIVERSITY Stanford, California 94305-4065
More informationarxiv: v1 [cs.ds] 17 Mar 2016
Tight Bounds for Single-Pass Streaing Coplexity of the Set Cover Proble Sepehr Assadi Sanjeev Khanna Yang Li Abstract arxiv:1603.05715v1 [cs.ds] 17 Mar 2016 We resolve the space coplexity of single-pass
More informationKernel Methods and Support Vector Machines
Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic
More informationImpact of Imperfect Channel State Information on ARQ Schemes over Rayleigh Fading Channels
This full text paper was peer reviewed at the direction of IEEE Counications Society subject atter experts for publication in the IEEE ICC 9 proceedings Ipact of Iperfect Channel State Inforation on ARQ
More informationIEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 60, NO. 2, FEBRUARY ETSP stands for the Euclidean traveling salesman problem.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 60, NO., FEBRUARY 015 37 Target Assignent in Robotic Networks: Distance Optiality Guarantees and Hierarchical Strategies Jingjin Yu, Meber, IEEE, Soon-Jo Chung,
More informationThe Hilbert Schmidt version of the commutator theorem for zero trace matrices
The Hilbert Schidt version of the coutator theore for zero trace atrices Oer Angel Gideon Schechtan March 205 Abstract Let A be a coplex atrix with zero trace. Then there are atrices B and C such that
More informationAN APPLICATION OF INCOMPLETE BLOCK DESIGNS. K. J. C. Smith University of North Carolina. Institute of Statistics Mimeo Series No. 587.
AN APPLICATION OF INCOMPLETE BLOCK DESIGNS TO THE CONSTRUCTION OF ERROR-CORRECTING CODES by K. J. C. Sith University of North Carolina Institute of Statistics Mieo Series No. 587 August, 1968 This research
More information1 Proof of learning bounds
COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #4 Scribe: Akshay Mittal February 13, 2013 1 Proof of learning bounds For intuition of the following theore, suppose there exists a
More informationFundamental Limits of Database Alignment
Fundaental Liits of Database Alignent Daniel Cullina Dept of Electrical Engineering Princeton University dcullina@princetonedu Prateek Mittal Dept of Electrical Engineering Princeton University pittal@princetonedu
More informationA remark on a success rate model for DPA and CPA
A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance
More informationNonmonotonic Networks. a. IRST, I Povo (Trento) Italy, b. Univ. of Trento, Physics Dept., I Povo (Trento) Italy
Storage Capacity and Dynaics of Nononotonic Networks Bruno Crespi a and Ignazio Lazzizzera b a. IRST, I-38050 Povo (Trento) Italy, b. Univ. of Trento, Physics Dept., I-38050 Povo (Trento) Italy INFN Gruppo
More information1 Rademacher Complexity Bounds
COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #10 Scribe: Max Goer March 07, 2013 1 Radeacher Coplexity Bounds Recall the following theore fro last lecture: Theore 1. With probability
More informationA Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair
Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving
More informationQuantile Search: A Distance-Penalized Active Learning Algorithm for Spatial Sampling
Quantile Search: A Distance-Penalized Active Learning Algorith for Spatial Sapling John Lipor 1, Laura Balzano 1, Branko Kerkez 2, and Don Scavia 3 1 Departent of Electrical and Coputer Engineering, 2
More informationNew Slack-Monotonic Schedulability Analysis of Real-Time Tasks on Multiprocessors
New Slack-Monotonic Schedulability Analysis of Real-Tie Tasks on Multiprocessors Risat Mahud Pathan and Jan Jonsson Chalers University of Technology SE-41 96, Göteborg, Sweden {risat, janjo}@chalers.se
More informationMachine Learning Basics: Estimators, Bias and Variance
Machine Learning Basics: Estiators, Bias and Variance Sargur N. srihari@cedar.buffalo.edu This is part of lecture slides on Deep Learning: http://www.cedar.buffalo.edu/~srihari/cse676 1 Topics in Basics
More informationtime time δ jobs jobs
Approxiating Total Flow Tie on Parallel Machines Stefano Leonardi Danny Raz y Abstract We consider the proble of optiizing the total ow tie of a strea of jobs that are released over tie in a ultiprocessor
More informationA Markov Framework for the Simple Genetic Algorithm
A arkov Fraework for the Siple Genetic Algorith Thoas E. Davis*, Jose C. Principe Electrical Engineering Departent University of Florida, Gainesville, FL 326 *WL/NGS Eglin AFB, FL32542 Abstract This paper
More informationLearnability and Stability in the General Learning Setting
Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz TTI-Chicago shai@tti-c.org Ohad Shair The Hebrew University ohadsh@cs.huji.ac.il Nathan Srebro TTI-Chicago nati@uchicago.edu
More informationarxiv: v2 [math.co] 3 Dec 2008
arxiv:0805.2814v2 [ath.co] 3 Dec 2008 Connectivity of the Unifor Rando Intersection Graph Sion R. Blacburn and Stefanie Gere Departent of Matheatics Royal Holloway, University of London Egha, Surrey TW20
More informationSupplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion
Suppleentary Material for Fast and Provable Algoriths for Spectrally Sparse Signal Reconstruction via Low-Ran Hanel Matrix Copletion Jian-Feng Cai Tianing Wang Ke Wei March 1, 017 Abstract We establish
More information